Results 1 - 20 of 30
1.
Int J Retina Vitreous ; 10(1): 37, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38671486

ABSTRACT

BACKGROUND: Code-free deep learning (CFDL) is a novel tool in artificial intelligence (AI). This study directly compared the discriminative performance of CFDL models designed by ophthalmologists without coding experience against bespoke models designed by AI experts in detecting retinal pathologies from optical coherence tomography (OCT) videos and fovea-centered images. METHODS: Using the same internal dataset of 1,173 OCT macular videos and fovea-centered images, model development was performed simultaneously but independently by an ophthalmology resident (CFDL models) and a postdoctoral researcher with expertise in AI (bespoke models). We designed a multi-class model to categorize videos and fovea-centered images into five labels: normal retina, macular hole, epiretinal membrane, wet age-related macular degeneration and diabetic macular edema. We qualitatively compared point estimates of the performance metrics of the CFDL and bespoke models. RESULTS: For videos, the CFDL model demonstrated excellent discriminative performance, even outperforming the bespoke models on some metrics: the area under the precision-recall curve was 0.984 (vs. 0.901), precision and sensitivity were both 94.1% (vs. 94.2%), and accuracy was 94.1% (vs. 96.7%). The fovea-centered CFDL model overall performed better than the video-based model and was as accurate as the best bespoke model. CONCLUSION: This comparative study demonstrated that code-free models created by clinicians without coding expertise classify various retinal pathologies from OCT videos and images as accurately as expert-designed bespoke models. CFDL represents a step towards the democratization of AI in medicine, although its numerous limitations must be carefully addressed to ensure its effective application in healthcare.

2.
Ocul Immunol Inflamm ; : 1-7, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38411944

ABSTRACT

PURPOSE: Automated machine learning (AutoML) allows clinicians without coding experience to build their own deep learning (DL) models. This study assesses the performance of AutoML in detecting and localizing ocular toxoplasmosis (OT) lesions in fundus images and compares it to that of expert-designed models. METHODS: Ophthalmology trainees without coding experience designed AutoML models using 304 labelled fundus images. We designed a binary model to differentiate OT from normal and an object detection model to visually identify OT lesions. RESULTS: The AutoML classification model had an area under the precision-recall curve (AuPRC) of 0.945, sensitivity of 100%, specificity of 83%, and accuracy of 93.5% (vs. 94%, 86%, and 91%, respectively, for the bespoke models). The AutoML object detection model had an AuPRC of 0.600, with a precision of 93.3% and recall of 56%. Using a diversified external validation dataset, our model correctly labeled 15 normal fundus images (100%) and 15 OT fundus images (100%), with mean confidence scores of 0.965 and 0.963, respectively. CONCLUSION: AutoML models created by ophthalmologists without coding experience performed comparably to or better than expert-designed bespoke models trained on the same dataset. By creatively using AutoML to identify OT lesions on fundus images, our approach brings the whole spectrum of DL model design into the hands of clinicians.

3.
Br J Ophthalmol ; 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38365427

ABSTRACT

BACKGROUND/AIMS: This study assesses the proficiency of Generative Pre-trained Transformer (GPT)-4 in answering questions about complex clinical ophthalmology cases. METHODS: We tested GPT-4 on 422 Journal of the American Medical Association Ophthalmology Clinical Challenges, prompting the model to determine the diagnosis (open-ended question) and identify the next step (multiple-choice question). We generated responses using two zero-shot prompting strategies, including zero-shot plan-and-solve+ (PS+), to improve the reasoning of the model. We compared the best-performing model to human graders in a benchmarking effort. RESULTS: Using PS+ prompting, GPT-4 achieved mean accuracies of 48.0% (95% CI 43.1% to 52.9%) and 63.0% (95% CI 58.2% to 67.6%) for diagnosis and next step, respectively. Next-step accuracy did not significantly differ by subspecialty (p=0.44). However, diagnostic accuracy in pathology and tumours was significantly higher than in uveitis (p=0.027). When the diagnosis was accurate, 75.2% (95% CI 68.6% to 80.9%) of the next steps were correct. Conversely, when the diagnosis was incorrect, 50.2% (95% CI 43.8% to 56.6%) of the next steps were accurate. The next step was three times more likely to be accurate when the initial diagnosis was correct (p<0.001). No significant differences were observed in diagnostic accuracy and decision-making between board-certified ophthalmologists and GPT-4. Among trainees, senior residents outperformed GPT-4 in diagnostic accuracy (p≤0.001 and p=0.049) and in accuracy of the next step (p=0.002 and p=0.020). CONCLUSION: Improved prompting enhances GPT-4's performance in complex clinical situations, although it does not surpass ophthalmology trainees in our context. Specialised large language models hold promise for future assistance in medical decision-making and diagnosis.
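As a minimal sketch of the zero-shot plan-and-solve+ strategy described above, assuming the openai v1 Python client and the generic PS+ trigger phrase from Wang et al. (2023) — the study's exact prompt wording and model settings may differ:

```python
# Hedged sketch of zero-shot PS+ prompting for an open-ended diagnosis question.
# The system message and trigger phrasing are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PS_PLUS_TRIGGER = (
    "Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)

def diagnose(case_description: str) -> str:
    """Ask for the most likely diagnosis on a clinical vignette."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # zero-shot, deterministic
        messages=[
            {"role": "system", "content": "You are an ophthalmology consultant."},
            {"role": "user",
             "content": f"{case_description}\n\n{PS_PLUS_TRIGGER}\n"
                        "What is the most likely diagnosis?"},
        ],
    )
    return response.choices[0].message.content
```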

4.
Saudi J Ophthalmol ; 37(3): 200-206, 2023.
Article in English | MEDLINE | ID: mdl-38074296

ABSTRACT

PURPOSE: Automated machine learning (AutoML) allows clinicians without coding experience to build their own deep learning (DL) models. This study assesses the performance of AutoML in diagnosing trachoma from field-collected conjunctival images and compares it to that of expert-designed DL models. METHODS: Two ophthalmology trainees without coding experience carried out AutoML model design using a publicly available image data set of field-collected conjunctival images (1656 labeled images). We designed two binary models to differentiate trachomatous inflammation-follicular (TF) and trachomatous inflammation-intense (TI) from normal. We then integrated an Edge model into an Android application using Google Firebase to make offline diagnoses. RESULTS: The AutoML models showed high diagnostic performance in the classification tasks, comparable to or better than that of the bespoke DL models. The TF model had an area under the precision-recall curve (AuPRC) of 0.945, sensitivity of 87%, specificity of 88%, and accuracy of 88%. The TI model had an AuPRC of 0.975, sensitivity of 95%, specificity of 92%, and accuracy of 93%. Through the Android app and using an external dataset, the AutoML model had an AuPRC of 0.875, sensitivity of 83%, specificity of 81%, and accuracy of 83%. CONCLUSION: AutoML models created by ophthalmologists without coding experience performed comparably to or better than bespoke models trained on the same dataset. Using AutoML to create models and edge computing to deploy them into smartphone-based apps, our approach brings the whole spectrum of DL model design into the hands of clinicians. This approach has the potential to democratize access to artificial intelligence.
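A sketch of how an exported AutoML Edge model can run offline, using the TensorFlow Lite Python interpreter for illustration; the Android app described above would use the equivalent Java/Kotlin API via Firebase ML. The model filename and label order are hypothetical placeholders:

```python
# Offline inference with an exported AutoML Edge model (illustrative sketch).
# "trachoma_tf.tflite" and the ["normal", "TF"] label order are assumptions.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="trachoma_tf.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(image: np.ndarray) -> dict:
    """image: preprocessed conjunctival photo, shaped/typed to match the model input."""
    interpreter.set_tensor(inp["index"], image[np.newaxis].astype(inp["dtype"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return dict(zip(["normal", "TF"], scores.tolist()))
```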

5.
Br J Ophthalmol ; 2023 Nov 03.
Article in English | MEDLINE | ID: mdl-37923374

ABSTRACT

BACKGROUND: Evidence on the performance of Generative Pre-trained Transformer 4 (GPT-4), a large language model (LLM), in the ophthalmology question-answering domain is needed. METHODS: We tested GPT-4 on two 260-question multiple-choice sets from the Basic and Clinical Science Course (BCSC) Self-Assessment Program and the OphthoQuestions question banks. We compared the accuracy of GPT-4 models with varying temperatures (a creativity setting) and evaluated their responses in a subset of questions. We also compared the best-performing GPT-4 model to GPT-3.5 and to historical human performance. RESULTS: GPT-4-0.3 (GPT-4 with a temperature of 0.3) achieved the highest accuracy among GPT-4 models, with 75.8% on the BCSC set and 70.0% on the OphthoQuestions set. The combined accuracy was 72.9%, an 18.3-percentage-point improvement over GPT-3.5 (p<0.001). Human graders preferred responses from models with a temperature higher than 0 (more creative). Exam section, question difficulty, and cognitive level were all predictive of GPT-4-0.3 answer accuracy. GPT-4-0.3's performance was numerically superior to human performance on the BCSC (75.8% vs 73.3%) and OphthoQuestions (70.0% vs 63.0%) sets, but the differences were not statistically significant (p=0.55 and p=0.09). CONCLUSION: GPT-4, an LLM trained on non-ophthalmology-specific data, performs significantly better than its predecessor on simulated ophthalmology board-style exams. Remarkably, its performance tended to be superior to historical human performance, but that difference was not statistically significant in our study.
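A hedged sketch of the temperature comparison: query each multiple-choice item at several temperatures and score against the bank's answer key. It assumes the openai v1 client; the question format and answer parsing are simplified placeholders:

```python
# Compare accuracy across temperature settings (illustrative sketch only).
from openai import OpenAI

client = OpenAI()

def answer(question: str, temperature: float) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=temperature,
        messages=[{"role": "user",
                   "content": f"{question}\nAnswer with a single letter (A-D)."}],
    )
    return resp.choices[0].message.content.strip()[0].upper()

def accuracy(bank: list[tuple[str, str]], temperature: float) -> float:
    """bank: (question, correct_letter) pairs from the question sets."""
    correct = sum(answer(q, temperature) == key for q, key in bank)
    return correct / len(bank)

# e.g., compare accuracy(bank, 0.0) vs accuracy(bank, 0.3) vs accuracy(bank, 1.0)
```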

6.
Int J Med Inform ; 178: 105178, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37657204

ABSTRACT

BACKGROUND AND OBJECTIVE: The detection of retinal diseases using optical coherence tomography (OCT) images and videos is a concrete example of a data classification problem. In recent years, Transformer architectures have been successfully applied to solve a variety of real-world classification problems. Although they have shown impressive discriminative abilities compared to other state-of-the-art models, improving their performance is essential, especially in healthcare-related problems. METHODS: This paper presents an effective technique named model-based transformer (MBT). It builds on popular pre-trained Transformer models: the Vision Transformer and Swin Transformer for OCT image classification, and the Multiscale Vision Transformer for OCT video classification. The proposed approach represents OCT data by taking advantage of an approximate sparse representation technique, then estimates the optimal features and performs data classification. RESULTS: The experiments were carried out using three real-world retinal datasets. The experimental results on OCT image and OCT video datasets show that the proposed method outperforms existing state-of-the-art deep learning approaches in terms of classification accuracy, precision, recall, F1-score, kappa, AUC-ROC, and AUC-PR. It can also boost the performance of existing Transformer models, including the Vision Transformer and Swin Transformer for OCT image classification, and the Multiscale Vision Transformer for OCT video classification. CONCLUSIONS: This work presents an approach for the automated detection of retinal diseases. Although deep neural networks have shown great potential in ophthalmology applications, our findings demonstrate for the first time a new way to identify retinal pathologies using OCT videos instead of images. Moreover, our proposal can help researchers enhance the discriminative capacity of a variety of powerful deep learning models presented in published papers. This can be valuable for future directions in medical research and clinical practice.
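To illustrate the general idea described (pre-trained Transformer features passed through an approximate sparse representation before classification), here is a generic sketch under stated assumptions — it is not the authors' MBT implementation, and the dictionary size, feature extractor, and classifier are arbitrary choices:

```python
# Generic sketch: ViT features -> sparse codes -> linear classifier.
# train_images / train_labels are hypothetical placeholders for labeled OCT data.
import timm
import torch
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
vit.eval()

@torch.no_grad()
def features(batch: torch.Tensor) -> torch.Tensor:
    """batch: (N, 3, 224, 224) normalized OCT B-scans -> (N, 768) embeddings."""
    return vit(batch)

X_train = features(train_images).numpy()
coder = MiniBatchDictionaryLearning(n_components=256, transform_algorithm="omp")
Z_train = coder.fit(X_train).transform(X_train)   # approximate sparse codes
clf = LogisticRegression(max_iter=1000).fit(Z_train, train_labels)
```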

7.
Ophthalmol Sci ; 3(4): 100324, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37334036

ABSTRACT

Purpose: Foundation models are a novel class of artificial intelligence algorithms in which models are pretrained at scale on unannotated data and fine-tuned for a myriad of downstream tasks, such as generating text. This study assessed the accuracy of ChatGPT, a large language model (LLM), in the ophthalmology question-answering space. Design: Evaluation of diagnostic test or technology. Participants: ChatGPT is a publicly available LLM. Methods: We tested 2 versions of ChatGPT (January 9 "legacy" and ChatGPT Plus) on 2 popular multiple-choice question banks commonly used to prepare for the high-stakes Ophthalmic Knowledge Assessment Program (OKAP) examination. We generated two 260-question simulated exams from the Basic and Clinical Science Course (BCSC) Self-Assessment Program and the OphthoQuestions online question bank. We carried out logistic regression to determine the effect of the examination section, cognitive level, and difficulty index on answer accuracy. We also performed a post hoc analysis using Tukey's test to determine whether there were significant differences between the tested subspecialties. Main Outcome Measures: We reported the accuracy of ChatGPT for each examination section as the percentage of correct answers, comparing ChatGPT's outputs with the answer key provided by the question banks. We presented logistic regression results with a likelihood ratio (LR) chi-square. We considered differences between examination sections statistically significant at a P value of < 0.05. Results: The legacy model achieved 55.8% accuracy on the BCSC set and 42.7% on the OphthoQuestions set. With ChatGPT Plus, accuracy increased to 59.4% ± 0.6% and 49.2% ± 1.0%, respectively. Accuracy improved with easier questions when controlling for the examination section and cognitive level. Logistic regression analysis of the legacy model showed that the examination section (LR, 27.57; P = 0.006), followed by question difficulty (LR, 24.05; P < 0.001), was most predictive of ChatGPT's answer accuracy. Although the legacy model performed best in general medicine and worst in neuro-ophthalmology (P < 0.001) and ocular pathology (P = 0.029), similar post hoc findings were not seen with ChatGPT Plus, suggesting more consistent results across examination sections. Conclusion: ChatGPT has encouraging performance on a simulated OKAP examination. Specializing LLMs through domain-specific pretraining may be necessary to improve their performance in ophthalmic subspecialties. Financial Disclosures: Proprietary or commercial disclosure may be found after the references.
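A sketch of the regression analysis described, assuming a pandas DataFrame `df` built from the graded question bank; the column names are hypothetical:

```python
# Logistic regression of answer correctness on section, cognitive level, and
# difficulty, with a likelihood-ratio chi-square and a Tukey post hoc test.
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

fit = smf.logit(
    "correct ~ C(section) + C(cognitive_level) + difficulty", data=df
).fit(disp=False)

# Likelihood-ratio chi-square of the full model vs. the intercept-only model
print(f"LR chi2 = {fit.llr:.2f}, p = {fit.llr_pvalue:.4f}")

# Post hoc pairwise comparison of accuracy across subspecialty sections
print(pairwise_tukeyhsd(df["correct"], df["section"]))
```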

8.
Br J Ophthalmol ; 107(1): 90-95, 2023 01.
Article in English | MEDLINE | ID: mdl-34344669

ABSTRACT

AIMS: Automated machine learning (AutoML) is a novel tool in artificial intelligence (AI). This study assessed the discriminative performance of AutoML in differentiating retinal vein occlusion (RVO), retinitis pigmentosa (RP) and retinal detachment (RD) from normal fundi using ultra-widefield (UWF) pseudocolour fundus images. METHODS: Two ophthalmologists without coding experience carried out AutoML model design using a publicly available image data set (2137 labelled images). The data set was reviewed for low-quality and mislabelled images and then uploaded to the Google Cloud AutoML Vision platform for training and testing. We designed multiple binary models to differentiate RVO, RP and RD from normal fundi and compared them to bespoke models obtained from the literature. We then devised a multiclass model to detect RVO, RP and RD. Saliency maps were generated to assess the interpretability of the model. RESULTS: The AutoML models demonstrated high diagnostic performance in the binary classification tasks, generally comparable to that of bespoke deep-learning models (area under the precision-recall curve (AUPRC) 0.921-1, sensitivity 84.91%-89.77%, specificity 78.72%-100%). The multiclass AutoML model had an AUPRC of 0.876, a sensitivity of 77.93% and a positive predictive value of 82.59%. The per-label sensitivity and specificity, respectively, were normal fundi (91.49%, 86.75%), RVO (83.02%, 92.50%), RP (72.00%, 100%) and RD (79.55%, 96.80%). CONCLUSION: AutoML models created by ophthalmologists without coding experience can detect RVO, RP and RD in UWF images with very good diagnostic accuracy. The performance was comparable to that of bespoke deep-learning models derived by AI experts for RVO and RP but not for RD.
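The per-label sensitivity/specificity pairs quoted above follow from a one-vs-rest reading of the multiclass confusion matrix; a minimal sketch, with the label order assumed:

```python
# Per-label sensitivity and specificity from a multiclass confusion matrix.
# y_true / y_pred stand in for the held-out labels and model predictions.
from sklearn.metrics import confusion_matrix

labels = ["normal", "RVO", "RP", "RD"]
cm = confusion_matrix(y_true, y_pred, labels=labels)

for i, name in enumerate(labels):
    tp = cm[i, i]
    fn = cm[i].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(f"{name}: sensitivity={tp/(tp+fn):.2%}, specificity={tn/(tn+fp):.2%}")
```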


Subject(s)
Artificial Intelligence , Retinal Vein Occlusion , Humans , ROC Curve , Fundus Oculi , Machine Learning , Retina
9.
Br J Ophthalmol ; 107(1): 96-101, 2023 01.
Article in English | MEDLINE | ID: mdl-34362776

ABSTRACT

BACKGROUND/RATIONALE: Artificial intelligence (AI)-based clinical decision support tools, being developed across multiple fields in medicine, need to be evaluated for their impact on patient treatment and outcomes as well as on optimisation of the clinical workflow. The RAZORBILL study will investigate the impact of advanced AI segmentation algorithms on disease activity assessment in patients with neovascular age-related macular degeneration (nAMD) by enriching three-dimensional (3D) retinal optical coherence tomography (OCT) scans with automated fluid and layer quantification measurements. METHODS: RAZORBILL is an observational, multicentre, multinational, open-label study comprising two phases. (a) Clinical data collection (phase I): an observational design that enforces neither a strict visit schedule nor a mandated treatment regimen was chosen to collect data in a real-world clinical setting and to enable the evaluation in phase II. (b) OCT enrichment analysis (phase II): de-identified 3D OCT scans will be evaluated for disease activity. Within this evaluation, investigators will review each scan twice: once enriched with segmentation results (i.e., highlighted and quantified pathological fluid volumes) and once in its original (i.e., non-enriched) state. This review will be performed using an integrated crossover design, in which investigators serve as their own controls, allowing the analysis to account for differences in expertise and individual disease activity definitions. CONCLUSIONS: In order to apply novel AI tools in routine clinical care, their benefit as well as their operational feasibility need to be carefully investigated. RAZORBILL will inform on the value of AI-based clinical decision support tools. It will clarify whether these can be implemented in the clinical treatment of patients with nAMD and whether they allow for optimisation of individualised treatment in routine clinical care.


Subject(s)
Refractive Surgical Procedures , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Artificial Intelligence , Retina/diagnostic imaging , Retina/pathology , Algorithms , Observational Studies as Topic
10.
Med Image Anal ; 82: 102608, 2022 11.
Article in English | MEDLINE | ID: mdl-36150271

ABSTRACT

Vision Transformers have recently emerged as a competitive architecture in image classification. The tremendous popularity of this model and its variants comes from their high performance and their ability to produce interpretable predictions. However, both of these characteristics remain to be assessed in depth on retinal images. This study proposes a thorough performance evaluation of several Transformers compared to traditional Convolutional Neural Network (CNN) models for retinal disease classification. Special attention is given to multi-modality imaging (fundus and OCT) and generalization to external data. In addition, we propose a novel mechanism to generate interpretable predictions via attribution maps. Existing attribution methods for Transformer models have the disadvantage of producing low-resolution heatmaps. Our contribution, called Focused Attention, uses iterative conditional patch resampling to tackle this issue. By means of a survey involving four retinal specialists, we validated both the superior interpretability of attribution maps produced from Vision Transformers compared with those produced from CNNs and the relevance of Focused Attention as a lesion detector.
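For context, the low-resolution baseline that Focused Attention improves upon is typically attention rollout (Abnar & Zuidema, 2020). The sketch below shows that standard baseline, not the paper's method:

```python
# Attention rollout: propagate head-averaged attention through the layers,
# mixing in the residual connection, to attribute the CLS token to patches.
import torch

def attention_rollout(attentions: list[torch.Tensor]) -> torch.Tensor:
    """attentions: per-layer (heads, tokens, tokens) matrices for one image."""
    tokens = attentions[0].shape[-1]
    rollout = torch.eye(tokens)
    for attn in attentions:
        a = attn.mean(dim=0)                 # average over heads
        a = a + torch.eye(tokens)            # account for the residual connection
        a = a / a.sum(dim=-1, keepdim=True)  # renormalize rows
        rollout = a @ rollout
    # attribution of the CLS token to each image patch (drop the CLS column)
    return rollout[0, 1:]
```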


Subject(s)
Algorithms , Retinal Diseases , Humans , Neural Networks, Computer , Fundus Oculi , Retinal Diseases/diagnostic imaging , Retina/diagnostic imaging
11.
Int J Retina Vitreous ; 8(1): 70, 2022 Sep 30.
Article in English | MEDLINE | ID: mdl-36180942

ABSTRACT

BACKGROUND: To evaluate the rate of and risk factors for epiretinal membrane (ERM) formation and the need for ERM peeling after pars plana vitrectomy (PPV) for uncomplicated primary rhegmatogenous retinal detachment (RRD). METHODS: Retrospective, single-center cohort study of 119 consecutive patients (119 eyes) who underwent RRD repair using PPV. The primary outcomes were ERM formation, classified using an optical coherence tomography grading system, and the rate of ERM peeling. Visual acuity, postoperative complications, and risk factors for ERM formation and peeling were also identified. RESULTS: Postoperative ERM formation occurred in 69 eyes (58.0%); 56 (47.1%) were stage 1, 9 (7.6%) stage 2, 3 (2.5%) stage 3, and 1 (0.8%) stage 4. Only 6 (5.0%) eyes required secondary PPV for a visually significant ERM, with a mean time to reoperation of 488 ± 351 days. Risk factors for ERM formation included intraoperative cryotherapy, more than 1000 laser shots, 360° laser photocoagulation, and choroidal detachment (p < 0.01). Eyes with more than 3 tears showed a trend towards more ERM surgery (p = 0.10). CONCLUSIONS: Visually significant ERM formation following PPV for primary RRD was uncommon in this cohort (5%). Half of the ERMs were detected after the first postoperative year, indicating that this complication may be underreported in studies with only 1-year follow-up.

12.
Graefes Arch Clin Exp Ophthalmol ; 260(12): 3737-3778, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35857087

ABSTRACT

PURPOSE: This article is a scoping review of published, peer-reviewed articles applying deep learning (DL) to ultra-widefield (UWF) imaging. This study provides an overview of the published uses of DL with UWF imaging for the detection of ophthalmic and systemic diseases, generative image synthesis, quality assessment of images, and segmentation and localization of ophthalmic image features. METHODS: A literature search was performed up to August 31st, 2021 using PubMed, Embase, Cochrane Library, and Google Scholar. The inclusion criteria were as follows: (1) use of deep learning, (2) use of ultra-widefield imaging. The exclusion criteria were as follows: (1) articles published in any language other than English, (2) articles not peer-reviewed (usually preprints), (3) no full-text availability, (4) articles using machine learning algorithms other than deep learning. No study design was excluded from consideration. RESULTS: A total of 36 studies were included. Twenty-three studies discussed ophthalmic disease detection and classification, 5 discussed segmentation and localization in ultra-widefield images (UWFIs), 3 discussed generative image synthesis, 3 discussed ophthalmic image quality assessment, and 2 discussed detecting systemic diseases via UWF imaging. CONCLUSION: The application of DL to UWF imaging has demonstrated significant effectiveness in the diagnosis and detection of ophthalmic diseases, including diabetic retinopathy, retinal detachment, and glaucoma. DL has also been applied to the generation of synthetic ophthalmic images. This scoping review highlights and discusses the current uses of DL with UWF imaging and the future of DL applications in this field.


Subject(s)
Deep Learning , Diabetic Retinopathy , Eye Diseases , Retinal Detachment , Humans , Diabetic Retinopathy/diagnosis , Eye Diseases/diagnostic imaging , Research Design
13.
Biomed Opt Express ; 13(2): 850-861, 2022 Feb 01.
Article in English | MEDLINE | ID: mdl-35284163

ABSTRACT

We introduced and validated a method to encase guiding optical coherence tomography (OCT) probes in clinically relevant 36G polyimide subretinal injection (SI) cannulas. Modified SI cannulas presented consistent flow capacity and tolerated the typical mechanical stress encountered in clinical use without significant loss of sensitivity. We also developed an approach that uses a micromanipulator, modified SI cannulas, and an intuitive graphical user interface to enable precise SI. We tested the system using ex-vivo porcine eyes and found a high SI success rate of 95.0% (95% CI: 83.1-99.4). We also found that 75% of the injected volume ended up in the subretinal space. Finally, we showed that this approach can be applied to transform commercial 40G SI cannulas into guided cannulas. The modified cannulas and guiding approach can enable precise and reproducible SI of novel gene and cell therapies targeting retinal diseases.

14.
Sci Rep ; 12(1): 2398, 2022 02 14.
Article in English | MEDLINE | ID: mdl-35165304

ABSTRACT

This study assessed the performance of automated machine learning (AutoML) in classifying cataract surgery phases from surgical videos. Two ophthalmology trainees without coding experience designed a deep learning model in Google Cloud AutoML Video Classification for the classification of 10 different cataract surgery phases. We used two open-access, publicly available datasets (122 surgeries in total) for model training, validation and testing. External validation was performed on 10 surgeries drawn from another dataset. The AutoML model demonstrated excellent discriminative performance, even outperforming bespoke deep learning models handcrafted by experts. The area under the precision-recall curve was 0.855. At the 0.5 confidence threshold cut-off, the overall performance metrics were as follows: precision (81.0%), recall (77.1%), accuracy (96.0%) and F1 score (0.79). The per-segment metrics varied across the surgical phases: precision 66.7-100%, recall 46.2-100% and specificity 94.1-100%. Hydrodissection and phacoemulsification were the most accurately predicted phases (100% and 92.31% correct predictions, respectively). During external validation, the average precision was 54.2% (0.00-90.0%), the recall was 61.1% (0.00-100%) and the specificity was 96.2% (91.0-99.0%). In conclusion, a code-free AutoML model can classify cataract surgery phases from videos with accuracy comparable to or better than that of models developed by experts.
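A sketch of the 0.5-confidence-threshold evaluation quoted above: keep only predictions whose top score clears the cutoff, then compute precision and recall. The variable names are placeholders for the per-segment model outputs:

```python
# Precision/recall at a confidence cutoff over 10 surgical phases (sketch).
import numpy as np

def thresholded_metrics(scores: np.ndarray, y_true: np.ndarray, cutoff=0.5):
    """scores: (N, 10) phase probabilities; y_true: (N,) integer phase labels."""
    pred = scores.argmax(axis=1)
    kept = scores.max(axis=1) >= cutoff             # confident predictions only
    correct = (pred[kept] == y_true[kept]).sum()
    precision = correct / kept.sum()                # among confident predictions
    recall = correct / len(y_true)                  # abstentions count against recall
    return precision, recall
```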


Subject(s)
Cataract Extraction/standards , Lens, Crystalline/surgery , Machine Learning , Ophthalmology/standards , Cataract Extraction/methods , Deep Learning , Humans
15.
Transl Vis Sci Technol ; 10(13): 19, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34767622

ABSTRACT

Purpose: The occurrence of iatrogenic retinal breaks (RB) in pars plana vitrectomy (PPV) is a complication that compromises the overall efficacy of the surgery. A subset of iatrogenic RB occurs when the retina (rather than the vitreous gel) is cut accidentally by the vitrector. We developed a smart vitrector that can detect potential iatrogenic RB in real time and promptly activate a PPV machine response to prevent them. Methods: We fabricated the smart vitrectors by attaching a miniaturized fiber-based OCT sensor to commercial 25G vitrectors. The system's response time to an iatrogenic RB onset was measured and compared to the literature-reported, physiologically limited response time of the average surgeon. Two surgeons validated the system's ability to prevent simulated iatrogenic RB by performing PPV in pigs. Note that the system is meant to control the PPV machine and requires no visual or audio signal interpretation by the surgeons. Results: We found that the response time of the system (28.9 ± 6.5 ms) is 11 times shorter than the literature-reported, physiologically limited reaction time of the average surgeon (P < 0.0001). Ex vivo validation (porcine eyes) showed that the system prevents 78.95% (15/19; 95% confidence interval [CI] 54.43-93.95) of intentional attempts at creating RB, whereas in vivo validation showed that the system prevents 55.68% (30/54; 95% CI 41.40-69.08) and prevents or mitigates 70.37% (38/54; 95% CI 56.39-82.02) of such attempts. A subset of failures was classified as "early stop" (i.e., false positive), with a prevalence of 5.26% (1/19) in ex vivo tests and 24.07% (13/54) in in vivo tests. Conclusions: Our results indicate that the smart vitrector can prevent iatrogenic RB by providing seamless intraoperative feedback to the PPV machine. Importantly, the use of the smart vitrector requires no modifications of the established PPV procedure. It can mitigate a significant proportion of iatrogenic RB and thus improve the overall efficacy of the surgery. Translational Relevance: Clinical adoption of the smart vitrector could reduce the incidence of iatrogenic RB in PPV and thus improve the therapeutic outcome of the surgery.


Subject(s)
Retinal Perforations , Animals , Iatrogenic Disease/prevention & control , Retina , Retinal Perforations/surgery , Swine , Tomography, Optical Coherence , Vitrectomy
17.
Br J Ophthalmol ; 105(3): 392-396, 2021 03.
Article in English | MEDLINE | ID: mdl-32345604

ABSTRACT

BACKGROUND/AIMS: To evaluate the non-invasive measurement of ocular rigidity (OR), an important biomechanical property of the eye, as a predictor of intraocular pressure (IOP) elevation after anti-vascular endothelial growth factor (anti-VEGF) intravitreal injection (IVI). METHODS: Subjects requiring IVI of anti-VEGF for a pre-existing retinal condition were enrolled in this prospective cross-sectional study. OR was assessed in 18 eyes of 18 participants by measurement of pulsatile choroidal volume change using video-rate optical coherence tomography, and pulsatile IOP change using dynamic contour tonometry. IOP was measured using Tono-Pen XL before and immediately following the injection and was correlated with OR. RESULTS: The average increase in IOP following IVI was 19 ± 9 mm Hg, with a range of 7-33 mm Hg. The Spearman correlation coefficient between OR and IOP elevation following IVI was 0.796 (p<0.001), showing higher IOP elevation in more rigid eyes. A regression line was also calculated to predict the IOP spike based on the OR coefficient, such that IOP spike = 664.17 mm Hg·µL × OR + 4.59 mm Hg. CONCLUSION: This study shows a strong positive correlation between OR and acute IOP elevation following IVI. These findings indicate that the non-invasive measurement of OR could be an effective tool in identifying patients at risk of IOP spikes following IVI.
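As a worked example of this regression (a sketch; it assumes the OR coefficient is expressed in its conventional units of µL⁻¹):

\[ \mathrm{IOP\ spike} = 664.17\ \mathrm{mm\,Hg}\cdot\mu\mathrm{L} \times \mathrm{OR} + 4.59\ \mathrm{mm\,Hg} \]

For an illustrative rigidity of OR = 0.02 µL⁻¹, the predicted spike is 664.17 × 0.02 + 4.59 ≈ 17.9 mm Hg, which falls within the observed 7-33 mm Hg range.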


Subject(s)
Bevacizumab/administration & dosage , Eye/physiopathology , Intraocular Pressure/physiology , Wet Macular Degeneration/drug therapy , Aged , Angiogenesis Inhibitors/administration & dosage , Cross-Sectional Studies , Elasticity , Female , Humans , Intraocular Pressure/drug effects , Intravitreal Injections , Male , Prospective Studies , Receptors, Vascular Endothelial Growth Factor/antagonists & inhibitors , Wet Macular Degeneration/diagnosis , Wet Macular Degeneration/physiopathology
18.
Sci Rep ; 10(1): 19528, 2020 11 11.
Article in English | MEDLINE | ID: mdl-33177614

ABSTRACT

We aimed to assess the feasibility of machine learning (ML) algorithm design to predict proliferative vitreoretinopathy (PVR) by ophthalmologists without coding experience using automated ML (AutoML). The study was a retrospective cohort study of 506 eyes that underwent pars plana vitrectomy for rhegmatogenous retinal detachment (RRD) by a single surgeon at a tertiary-care hospital between 2012 and 2019. Two ophthalmologists without coding experience used an interactive application in MATLAB to build and evaluate ML algorithms for the prediction of postoperative PVR using clinical data from the electronic health records. The clinical features associated with postoperative PVR were determined by univariate feature selection. The area under the curve (AUC) for predicting postoperative PVR was better for models that included pre-existing PVR as an input. The quadratic support vector machine (SVM) model built using all selected clinical features had an AUC of 0.90, a sensitivity of 63.0%, and a specificity of 97.8%. An optimized Naïve Bayes algorithm that did not include pre-existing PVR as an input feature had an AUC of 0.81, a sensitivity of 54.3%, and a specificity of 92.4%. In conclusion, the development of ML models for the prediction of PVR by ophthalmologists without coding experience is feasible. Input from a data scientist might still be needed to tackle class imbalance, a common challenge in ML classification using real-world clinical data.
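A rough scikit-learn equivalent of the two models described (the study itself used an interactive MATLAB app): a quadratic SVM and Gaussian Naive Bayes, with class weighting as one simple answer to the class imbalance noted above. The feature matrix X and binary PVR labels y are assumed to come from the EHR:

```python
# Sketch of the two classifiers with cross-validated AUC (not the study's code).
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

quad_svm = SVC(kernel="poly", degree=2,
               class_weight="balanced")  # penalize errors on the rare PVR class more
nb = GaussianNB()

for name, model in [("quadratic SVM", quad_svm), ("naive Bayes", nb)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.2f}")
```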


Subject(s)
Machine Learning , Postoperative Complications/etiology , Retinal Detachment/surgery , Vitrectomy/adverse effects , Vitreoretinopathy, Proliferative/etiology , Aged , Algorithms , Diagnosis, Computer-Assisted , Female , Humans , Male , Middle Aged , Ophthalmologists , Retrospective Studies , Risk Factors , Vitrectomy/methods
19.
Exp Eye Res ; 190: 107831, 2020 01.
Article in English | MEDLINE | ID: mdl-31606450

ABSTRACT

Ocular rigidity (OR) is thought to play a role in the pathogenesis of glaucoma, but the lack of reliable non-invasive measurements has been a major technical challenge. We recently developed a clinical method using optical coherence tomography time-lapse imaging and automated choroidal segmentation to measure the pulsatile choroidal volume change (ΔV) and calculate OR using Friedenwald's equation. Here we assess the validity and repeatability of this non-invasive technique. We also propose an improved mathematical model of choroidal thickness to extrapolate ΔV from the pulsatile submacular choroidal thickness change more accurately. The new mathematical model uses anatomical data accounting for the choroidal thickness near the equator. The validity of the technique was tested by comparing OR coefficients obtained using our non-invasive method (OR_OCT) with those obtained using an invasive procedure involving intravitreal injections of bevacizumab (OR_IVI) in 12 eyes. Intrasession and intersession repeatability was assessed with two consecutive measurements of OR in 72 and 8 eyes, respectively. Using the new mathematical model, we obtained OR values closer to those obtained using the invasive procedure and previously reported techniques. A regression line was calculated to predict OR_IVI based on OR_OCT, such that OR_IVI = 0.655 × OR_OCT. A strong correlation between OR_OCT and OR_IVI was found, with a Spearman coefficient of 0.853 (p < 0.001). The intraclass correlation coefficients for intrasession and intersession repeatability were 0.925, 95% CI [0.881, 0.953] and 0.950, 95% CI [0.763, 0.990], respectively. This confirms the validity and good repeatability of OR measurements using our non-invasive clinical method.
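For reference, Friedenwald's pressure-volume relation invoked above is commonly written as follows (a standard form; the paper's exact notation may differ):

\[ \mathrm{OR} = \frac{\log_{10}\!\left(\mathrm{IOP}_{\max}\right) - \log_{10}\!\left(\mathrm{IOP}_{\min}\right)}{\Delta V} \]

so that OR (in µL⁻¹) expresses the change in log-IOP produced per microliter of pulsatile volume change ΔV, and the calibration OR_IVI = 0.655 × OR_OCT rescales the OCT-derived estimate to the invasive reference.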


Subject(s)
Choroid/blood supply , Diagnostic Techniques, Ophthalmological , Elasticity/physiology , Glaucoma, Open-Angle/physiopathology , Regional Blood Flow/physiology , Retinal Diseases/physiopathology , Tomography, Optical Coherence/methods , Aged , Angiogenesis Inhibitors/therapeutic use , Bevacizumab/therapeutic use , Biomechanical Phenomena , Choroid/diagnostic imaging , Female , Healthy Volunteers , Humans , Intraocular Pressure/physiology , Intravitreal Injections , Male , Middle Aged , Models, Theoretical , Organ Size , Reproducibility of Results , Retinal Diseases/drug therapy , Tonometry, Ocular , Vascular Endothelial Growth Factor A/antagonists & inhibitors
20.
IEEE Trans Med Imaging ; 38(10): 2434-2444, 2019 10.
Article in English | MEDLINE | ID: mdl-30908197

ABSTRACT

Obtaining the complete segmentation map of retinal lesions is the first step toward an automated diagnosis tool for retinopathy that is interpretable in its decision-making. However, the limited availability of ground-truth lesion maps at the pixel level restricts the ability of deep segmentation neural networks to generalize over large databases. In this paper, we propose a novel approach for training a convolutional multi-task architecture with supervised learning and reinforcing it with weakly supervised learning. The architecture is simultaneously trained for three tasks: segmentation of red lesions, segmentation of bright lesions, and lesion detection. In addition, we propose and discuss the advantages of a new preprocessing method that guarantees color consistency between the raw image and its enhanced version. Our complete system produces segmentations of both red and bright lesions. The method is validated at the pixel level and per image using four databases and a cross-validation strategy. When evaluated on the task of screening for the presence or absence of lesions on the Messidor image set, the proposed method achieves an area under the ROC curve of 0.839, comparable with the state of the art.
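A conceptual sketch of the multi-task objective described: two segmentation heads trained with pixel-level supervision where masks exist, reinforced by image-level labels through global pooling where they do not. This illustrates the idea only; the paper's architecture and weakly supervised scheme are more elaborate:

```python
# Combined supervised + weakly supervised loss for red/bright lesion heads.
import torch
import torch.nn.functional as F

def multitask_loss(red_logits, bright_logits, red_mask, bright_mask,
                   image_labels, has_pixel_gt):
    """red/bright_logits and masks: (N,1,H,W); image_labels: (N,2) floats;
    has_pixel_gt: (N,) bool marking images with pixel-level ground truth."""
    seg = 0.0
    if has_pixel_gt.any():
        idx = has_pixel_gt
        seg = (F.binary_cross_entropy_with_logits(red_logits[idx], red_mask[idx]) +
               F.binary_cross_entropy_with_logits(bright_logits[idx], bright_mask[idx]))
    # Weak supervision: max-pooled lesion maps must match image-level labels
    pooled = torch.stack([red_logits.amax(dim=(1, 2, 3)),
                          bright_logits.amax(dim=(1, 2, 3))], dim=1)
    det = F.binary_cross_entropy_with_logits(pooled, image_labels)
    return seg + det
```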


Subject(s)
Image Interpretation, Computer-Assisted/methods , Retina/diagnostic imaging , Retinal Diseases/diagnostic imaging , Supervised Machine Learning , Databases, Factual , Diagnostic Techniques, Ophthalmological , Fundus Oculi , Humans , ROC Curve